Variational Inference of Joint Models using Multivariate Gaussian Convolution Processes
We present a non-parametric prognostic framework for individualized event
prediction based on joint modeling of both longitudinal and time-to-event data.
Our approach exploits a multivariate Gaussian convolution process (MGCP) to
model the evolution of longitudinal signals and a Cox model to map
time-to-event data with longitudinal data modeled through the MGCP. Taking
advantage of the unique structure imposed by convolved processes, we provide a
variational inference framework to simultaneously estimate parameters in the
joint MGCP-Cox model. This significantly reduces computational complexity and
safeguards against model overfitting. Experiments on synthetic and real-world
data show that the proposed framework outperforms state-of-the-art approaches
built on two-stage inference and strong parametric assumptions.
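The joint-modeling idea above can be illustrated numerically. The following is a deliberately simplified sketch, not the paper's MGCP-Cox estimator: a moving-average smoother stands in for the MGCP posterior mean, and the association parameter `beta` and constant baseline hazard `h0` are hypothetical values chosen for illustration.

```python
import numpy as np

# Sketch: a Cox-style hazard driven by a smoothed longitudinal signal.
# The smoother below is a stand-in for the MGCP posterior mean (assumption);
# beta and h0 are made-up illustrative constants.
rng = np.random.default_rng(0)

t = np.linspace(0, 10, 50)
signal = np.sin(t) + 0.1 * rng.standard_normal(t.size)  # noisy longitudinal signal

kernel = np.ones(5) / 5
m_t = np.convolve(signal, kernel, mode="same")  # smoothed signal m(t)

beta = 0.8   # hypothetical association parameter
h0 = 0.05    # hypothetical constant baseline hazard
hazard = h0 * np.exp(beta * m_t)   # Cox-style hazard h(t) = h0 exp(beta * m(t))

# Survival via the cumulative hazard: S(t) = exp(-integral of h)
dt = t[1] - t[0]
survival = np.exp(-np.cumsum(hazard) * dt)
print(round(float(survival[-1]), 3))
```

The two-stage alternative the abstract criticizes would fit the smoother and the hazard separately; the paper's contribution is estimating both jointly by variational inference.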
The Renyi Gaussian Process: Towards Improved Generalization
We introduce an alternative closed-form lower bound on the Gaussian process
(GP) likelihood based on the Rényi α-divergence. This new
lower bound can be viewed as a convex combination of the Nyström
approximation and the exact GP likelihood. The key advantage of this bound is
its capability to control and tune the regularization enforced on the model;
it is thus a generalization of the traditional variational GP
regression. From a theoretical perspective, we provide the convergence rate and
risk bound for inference using our proposed approach. Experiments on real data
show that the proposed algorithm may be able to deliver improvement over
several inference methods.
Why Non-myopic Bayesian Optimization is Promising and How Far Should We Look-ahead? A Study via Rollout
Lookahead (also known as non-myopic) Bayesian optimization (BO) aims to find
optimal sampling policies by solving a dynamic programming (DP)
formulation that maximizes a long-term reward over a rolling horizon. Though
promising, lookahead BO faces the risk of error propagation through its
increased dependence on a possibly mis-specified model. In this work we focus
on the rollout approximation for solving the intractable DP. We first prove the
improving nature of rollout in tackling lookahead BO and provide a sufficient
condition for the heuristic used to be rollout improving. We then provide both
a theoretical and practical guideline to decide on the rolling horizon
stagewise. This guideline is built on quantifying the negative effect of a
mis-specified model. To illustrate our idea, we provide case studies on both
single and multi-information source BO. Empirical results show the advantageous
properties of our method over several myopic and non-myopic BO algorithms.
Comment: 12 pages, 1 figure. Accepted by AISTATS 202
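The rollout principle in the abstract above can be shown on a toy problem. This sketch is not the paper's BO formulation: it is a 3-stage deterministic DP with made-up rewards in which the per-stage reward depends on the previous action, so the myopic greedy heuristic is suboptimal. Rollout scores each action by its immediate reward plus the greedy continuation, and by construction does at least as well as greedy.

```python
# Toy rollout on a 3-stage DP. R[stage][prev_action][action] holds rewards;
# all numbers are arbitrary illustrative assumptions.
R = [
    [[1, 0], [1, 0]],
    [[0, 0], [10, 10]],
    [[0, 0], [0, 0]],
]

def greedy_from(stage, prev):
    """Total reward of running the myopic greedy heuristic from (stage, prev)."""
    total = 0
    for s in range(stage, len(R)):
        a = max((0, 1), key=lambda a: R[s][prev][a])
        total += R[s][prev][a]
        prev = a
    return total

def rollout_total():
    """One-step lookahead: immediate reward plus greedy continuation."""
    total, prev = 0, 0
    for s in range(len(R)):
        a = max((0, 1), key=lambda a: R[s][prev][a] + greedy_from(s + 1, a))
        total += R[s][prev][a]
        prev = a
    return total

print(greedy_from(0, 0), rollout_total())  # greedy is trapped; rollout escapes
```

Greedy grabs the stage-0 reward of 1 and forfeits the large stage-1 reward; rollout's simulated continuation reveals the better first action. The paper's question of how far to look ahead corresponds to how deep this simulated continuation runs under a possibly mis-specified surrogate model.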
SALR: Sharpness-aware Learning Rate Scheduler for Improved Generalization
In an effort to improve generalization in deep learning and automate the
process of learning rate scheduling, we propose SALR: a sharpness-aware
learning rate update technique designed to recover flat minimizers. Our method
dynamically updates the learning rate of gradient-based optimizers based on the
local sharpness of the loss function. This allows optimizers to automatically
increase learning rates at sharp valleys to increase the chance of escaping
them. We demonstrate the effectiveness of SALR when adopted by various
algorithms over a broad range of networks. Our experiments indicate that SALR
improves generalization, converges faster, and drives solutions to
significantly flatter regions.
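A minimal sketch in the spirit of the rule described above (the exact SALR update is in the paper; the sharpness proxy and all constants here are assumptions): scale a base learning rate by an estimate of local sharpness relative to its running average, so that sharper regions get proportionally larger steps.

```python
# Sharpness-aware learning-rate scaling on a 1-D quadratic loss w^2.
# Gradient-norm^2 is used as a crude sharpness proxy (assumption).
def grad(w):
    return 2.0 * w   # gradient of the quadratic loss

w, base_lr, avg_sharp = 5.0, 0.05, 1.0
for _ in range(500):
    g = grad(w)
    sharp = g * g                                # local sharpness proxy
    avg_sharp = 0.9 * avg_sharp + 0.1 * sharp    # running average
    lr = base_lr * sharp / (avg_sharp + 1e-12)   # sharpness-aware scaling
    w -= lr * g
print(round(w, 6))
```

When the iterate sits somewhere sharper than its recent history, the ratio exceeds one and the step grows, which is the mechanism the abstract credits for escaping sharp valleys.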
Ab initio uncertainty quantification in scattering analysis of microscopy
Estimating parameters from data is a fundamental problem in physics,
customarily done by minimizing a loss function between a model and observed
statistics. In scattering-based analysis, researchers often employ their domain
expertise to select a specific range of wavevectors for analysis, a choice that
can vary depending on the specific case. We introduce another paradigm that
defines a probabilistic generative model from the beginning of data processing
and propagates the uncertainty for parameter estimation, termed ab initio
uncertainty quantification (AIUQ). As an illustrative example, we demonstrate
this approach with differential dynamic microscopy (DDM) that extracts
dynamical information through Fourier analysis at a selected range of
wavevectors. We first show that DDM is equivalent to fitting a temporal
variogram in the reciprocal space using a latent factor model as the generative
model. Then we derive the maximum marginal likelihood estimator, which
optimally weighs information at all wavevectors, therefore eliminating the need
to select the range of wavevectors. Furthermore, we substantially reduce the
computational cost by utilizing the generalized Schur algorithm for Toeplitz
covariances without approximation. Simulated studies validate that AIUQ
significantly improves estimation accuracy and enables model selection with
automated analysis. The utility of AIUQ is also demonstrated by three distinct
sets of experiments: first in an isotropic Newtonian fluid, pushing limits of
optically dense systems compared to multiple particle tracking; next in a
system undergoing a sol-gel transition, automating the determination of gelling
points and critical exponent; and lastly, in discerning anisotropic diffusive
behavior of colloids in a liquid crystal. These outcomes collectively
underscore AIUQ's versatility to capture system dynamics in an efficient and
automated manner.
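The marginal-likelihood-with-Toeplitz-covariance idea can be sketched directly. The example below is an illustration only, with a hypothetical exponential autocovariance and noise nugget: it evaluates a Gaussian marginal likelihood whose covariance depends on lag alone, hence is Toeplitz. Here the solve is a dense O(n^3) operation; the abstract's point is that this structure admits exact fast algorithms (the generalized Schur algorithm) instead.

```python
import numpy as np

# Gaussian log-likelihood under a Toeplitz (stationary, lag-only) covariance.
# Autocovariance form and nugget value are illustrative assumptions.
def toeplitz(c):
    n = len(c)
    idx = np.abs(np.arange(n)[:, None] - np.arange(n)[None, :])
    return c[idx]

n = 64
lags = np.arange(n)
c = np.exp(-lags / 10.0)             # hypothetical exponential autocovariance
C = toeplitz(c) + 0.1 * np.eye(n)    # add a noise nugget on the diagonal

rng = np.random.default_rng(2)
y = rng.multivariate_normal(np.zeros(n), C)   # simulated observation

sign, logdet = np.linalg.slogdet(C)
loglik = -0.5 * (y @ np.linalg.solve(C, y) + logdet + n * np.log(2 * np.pi))
print(np.isfinite(loglik))
```

Maximizing this likelihood over the autocovariance parameters weighs all wavevectors (here, lags) at once, which is what removes the hand-picked analysis range.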
Origin and tuning of the magnetocaloric effect for the magnetic refrigerant MnFe(P1-xGex)
Neutron diffraction and magnetization measurements of the magnetorefrigerant
Mn1+yFe1-yP1-xGex reveal that the ferromagnetic and paramagnetic phases
correspond to two very distinct crystal structures, with the magnetic entropy
change as a function of magnetic field or temperature being directly controlled
by the phase fraction of this first-order transition. By tuning the physical
properties of this system we have achieved a maximum magnetic entropy change
exceeding 74 J/kg K for both increasing and decreasing field, more than twice
the value of the previous record.
Comment: 6 figures, one table.
Potential of Core-Collapse Supernova Neutrino Detection at JUNO
JUNO is an underground neutrino observatory under construction in Jiangmen, China. It uses 20 kton of liquid scintillator as the target, which enables it to detect supernova burst neutrinos with large statistics for the next galactic core-collapse supernova (CCSN) and also pre-supernova neutrinos from nearby CCSN progenitors. All flavors of supernova burst neutrinos can be detected by JUNO via several interaction channels, including inverse beta decay, elastic scattering on electrons and protons, interactions on 12C nuclei, etc. This retains the possibility for JUNO to reconstruct the energy spectra of supernova burst neutrinos of all flavors. Real-time monitoring systems based on FPGA and DAQ are under development in JUNO, which allow a prompt alert and trigger-less data acquisition of CCSN events. The alert performance of both monitoring systems has been thoroughly studied using simulations. Moreover, once a CCSN is tagged, the system can give fast characterizations, such as the directionality and the light curve.
Detection of the Diffuse Supernova Neutrino Background with JUNO
As an underground multi-purpose neutrino detector with 20 kton of liquid scintillator, the Jiangmen Underground Neutrino Observatory (JUNO) is competitive with and complementary to the water-Cherenkov detectors in the search for the diffuse supernova neutrino background (DSNB). Typical supernova models predict 2-4 events per year within the optimal observation window in the JUNO detector. The dominant background is from the neutral-current (NC) interaction of atmospheric neutrinos with 12C nuclei, which surpasses the DSNB by more than one order of magnitude. We evaluated the systematic uncertainty of the NC background from the spread of a variety of data-driven models and further developed a method to determine the NC background to within 15% with in situ measurements after ten years of running. In addition, the NC-like backgrounds can be effectively suppressed by the intrinsic pulse-shape discrimination (PSD) capabilities of liquid scintillators. In this talk, I will present in detail the improvements on NC background uncertainty evaluation, PSD discriminator development, and finally, the potential of the DSNB sensitivity in JUNO.
Real-time Monitoring for the Next Core-Collapse Supernova in JUNO
Core-collapse supernova (CCSN) is one of the most energetic astrophysical
events in the Universe. The early and prompt detection of neutrinos before
(pre-SN) and during the SN burst is a unique opportunity to realize the
multi-messenger observation of the CCSN events. In this work, we describe the
monitoring concept and present the sensitivity of the system to the pre-SN and
SN neutrinos at the Jiangmen Underground Neutrino Observatory (JUNO), which is
a 20 kton liquid scintillator detector under construction in South China. The
real-time monitoring system is designed with both the prompt monitors on the
electronic board and online monitors at the data acquisition stage, in order to
ensure both the alert speed and alert coverage of progenitor stars. By assuming
a false alert rate of 1 per year, this monitoring system can be sensitive to
the pre-SN neutrinos up to a distance of about 1.6 (0.9) kpc and SN neutrinos
up to about 370 (360) kpc for a progenitor mass of 30 solar masses for the case
of normal (inverted) mass ordering. The pointing ability of the CCSN is
evaluated by using the accumulated event anisotropy of the inverse beta decay
interactions from pre-SN or SN neutrinos, which, along with the early alert,
can play important roles for the followup multi-messenger observations of the
next Galactic or nearby extragalactic CCSN.
Comment: 24 pages, 9 figures.
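The "false alert rate of 1 per year" constraint above translates into a counting threshold. The back-of-envelope sketch below uses entirely made-up numbers (window length, background rate): it finds the smallest event count N in a window such that a Poisson background alone exceeds N rarely enough to keep the false alert rate near one per year.

```python
import math

# Poisson alert-threshold sketch; background rate and window size are
# hypothetical, not JUNO's actual operating parameters.
def p_ge(n, mu):
    """P[Poisson(mu) >= n], via the complement of the lower tail."""
    return 1.0 - sum(math.exp(-mu) * mu**k / math.factorial(k) for k in range(n))

bkg_per_window = 2.0                 # assumed background events per 10-s window
windows_per_year = 365 * 24 * 360    # non-overlapping 10-s windows in a year
target = 1.0 / windows_per_year      # per-window false-alert probability

n = 0
while p_ge(n, bkg_per_window) > target:
    n += 1
print(n)   # smallest threshold meeting roughly 1 false alert per year
```

Lowering the threshold (or the background) is what extends the distance reach quoted in the abstract, since fewer signal events are then needed to trigger an alert.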
Robust estimation of bacterial cell count from optical density
Optical density (OD) is widely used to estimate the density of cells in liquid culture, but cannot be compared between instruments without a standardized calibration protocol and is challenging to relate to actual cell count. We address this with an interlaboratory study comparing three simple, low-cost, and highly accessible OD calibration protocols across 244 laboratories, applied to eight strains of constitutive GFP-expressing E. coli. Based on our results, we recommend calibrating OD to estimated cell count using serial dilution of silica microspheres, which produces highly precise calibration (95.5% of residuals <1.2-fold), is easily assessed for quality control, also assesses instrument effective linear range, and can be combined with fluorescence calibration to obtain units of Molecules of Equivalent Fluorescein (MEFL) per cell, allowing direct comparison and data fusion with flow cytometry measurements: in our study, fluorescence per cell measurements showed only a 1.07-fold mean difference between plate reader and flow cytometry data.
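The serial-dilution calibration logic can be sketched in a few lines. The values below are simulated, not the study's data: a twofold dilution series of particles with known count yields OD readings with a little multiplicative noise, and a through-origin least-squares fit converts OD back into estimated particle count, whose fold error can then be checked against a residual criterion like the one quoted above.

```python
import numpy as np

# Simulated serial-dilution calibration; true slope and noise level are
# made-up assumptions for illustration.
counts = np.array([2.0e8 / 2**i for i in range(8)])   # known 2x dilution series
rng = np.random.default_rng(3)
od = 1.5e-9 * counts * (1 + 0.02 * rng.standard_normal(counts.size))

# Least-squares line through the origin: counts ~ slope * od
slope = np.sum(od * counts) / np.sum(od * od)

def od_to_count(x):
    return slope * x

est = od_to_count(od)
fold_err = np.max(np.maximum(est / counts, counts / est))
print(fold_err < 1.2)   # check residuals against a 1.2-fold criterion
```

In practice the fit would be restricted to the instrument's effective linear range, which the protocol in the study also identifies from the same dilution series.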